Fast Context Adaptation via Meta-Learning
We propose CAVIA for meta-learning, a simple extension to MAML that is less
prone to meta-overfitting, easier to parallelise, and more interpretable. CAVIA
partitions the model parameters into two parts: context parameters that serve
as additional input to the model and are adapted on individual tasks, and
shared parameters that are meta-trained and shared across tasks. At test time,
only the context parameters are updated, leading to a low-dimensional task
representation. We show empirically that CAVIA outperforms MAML for regression,
classification, and reinforcement learning. Our experiments also highlight
weaknesses in current benchmarks, in that the amount of adaptation needed in
some cases is small.
Comment: Published at the International Conference on Machine Learning (ICML) 2019
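
As a rough illustration of the idea (not the authors' implementation; the
model, loss, and hyperparameters below are placeholder assumptions), the
inner loop adapts only a small context vector while the shared weights stay
fixed:

import torch

def adapt_context(model, context, task_batch, inner_lr=0.1, steps=2):
    # Adapt only the low-dimensional context parameters on one task;
    # the shared weights of `model` are updated in the outer meta-loop.
    x, y = task_batch
    for _ in range(steps):
        # The context enters as an extra input, broadcast over the batch.
        inp = torch.cat([x, context.expand(x.size(0), -1)], dim=-1)
        loss = torch.nn.functional.mse_loss(model(inp), y)
        # Differentiate w.r.t. the context only, keeping the graph so the
        # outer loop can backpropagate through the adaptation steps.
        (grad,) = torch.autograd.grad(loss, context, create_graph=True)
        context = context - inner_lr * grad
    return context

Starting from e.g. torch.zeros(n_context, requires_grad=True), only this
routine runs at meta-test time, so the adapted context doubles as a
low-dimensional task representation.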
Deep Variational Reinforcement Learning for POMDPs
Many real-world sequential decision making problems are partially observable
by nature, and the environment model is typically unknown. Consequently, there
is a great need for reinforcement learning methods that can tackle such
problems
given only a stream of incomplete and noisy observations. In this paper, we
propose deep variational reinforcement learning (DVRL), which introduces an
inductive bias that allows an agent to learn a generative model of the
environment and perform inference in that model to effectively aggregate the
available information. We develop an n-step approximation to the evidence lower
bound (ELBO), allowing the model to be trained jointly with the policy. This
ensures that the latent state representation is suitable for the control task.
In experiments on Mountain Hike and flickering Atari we show that our method
outperforms previous approaches relying on recurrent neural networks to encode
the past.
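
A minimal sketch of the joint objective described above, under strong
simplifying assumptions: the encoder, decoder, and policy-loss callables are
placeholders, and a standard-normal prior stands in for the learned
sequential prior and particle-based inference used in the paper:

import torch

def joint_loss(encoder, decoder, policy_loss_fn, obs_seq, elbo_weight=1.0):
    # Accumulate an ELBO over an n-step window of observations while
    # threading a recurrent hidden state through the approximate posterior.
    h, elbo, z = None, 0.0, None
    for o in obs_seq:
        mu, logvar, h = encoder(o, h)                 # recurrent posterior
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        recon = decoder(z)
        # Gaussian reconstruction term minus KL to a standard-normal prior.
        elbo = elbo - ((recon - o) ** 2).sum() \
                    - 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum()
    # The latent state also feeds the policy, so minimising this joint loss
    # shapes the representation for both prediction and control.
    return policy_loss_fn(z) - elbo_weight * elbo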
Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making
In multi-objective decision planning and learning, much attention is paid to
producing optimal solution sets that contain an optimal policy for every
possible user preference profile. We argue that the step that follows, i.e.,
determining which policy to execute by maximising the user's intrinsic utility
function over this (possibly infinite) set, is under-studied. This paper aims
to fill this gap. We build on previous work on Gaussian processes and pairwise
comparisons for preference modelling, extend it to the multi-objective decision
support scenario, and propose new ordered preference elicitation strategies
based on ranking and clustering. Our main contribution is an in-depth
evaluation of these strategies using computer and human-based experiments. We
show that our proposed elicitation strategies outperform the currently used
pairwise methods, and find that users prefer ranking the most. Our experiments
further show that utilising monotonicity information in GPs by using a linear
prior mean at the start and virtual comparisons to the nadir and ideal points
increases performance. We demonstrate our decision support framework in a
real-world study on traffic regulation, conducted with the city of Amsterdam.
Comment: AAMAS 2018, Source code at
https://github.com/lmzintgraf/gp_pref_elici
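
The virtual-comparison trick mentioned above is easy to state in code. This
sketch abstracts away the GP preference model itself and only shows how
monotonicity information enters as extra (winner, loser) pairs against the
nadir and ideal points:

def add_virtual_comparisons(items, comparisons, nadir, ideal):
    # comparisons: list of (winner, loser) pairs of objective vectors.
    augmented = list(comparisons)
    for x in items:
        augmented.append((x, nadir))   # every item beats the nadir point
        augmented.append((ideal, x))   # the ideal point beats every item
    return augmented

These virtual pairs cost no user queries but constrain the learned utility
to increase towards the ideal point.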
Quality Assessment of MORL Algorithms: A Utility-Based Approach
Sequential decision-making problems with multiple objectives occur often in practice. In such settings, the utility of a policy depends on how the user values different trade-offs between the objectives. Such valuations can be expressed by a so-called scalarisation function. However, the exact scalarisation function may be unknown at the time the agents must learn or plan. Therefore, instead of a single solution, the agents aim to produce a solution set that contains an optimal solution for every possible scalarisation. Because it is often not possible to produce an exact solution set, many algorithms have been proposed that produce approximate solution sets instead. We argue that these algorithms should be compared on the basis of user utility, and on a wide range of problems. In practice, however, comparisons of the quality of these algorithms have typically been done with only a few limited benchmarks and with metrics that do not directly express the utility for the user. In this paper, we propose two metrics that express either the expected utility or the maximal utility loss with respect to the optimal solution set. Furthermore, we propose a generalised benchmark in order to compare algorithms more reliably.
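
The two metrics admit a compact sketch under the (illustrative) assumption
of linear scalarisations w . v with weights drawn from a Dirichlet
distribution; the function names below are not from the paper:

import numpy as np

def expected_utility(solution_set, weights):
    # Mean, over sampled weights, of the best scalarised value in the set.
    scal = weights @ np.asarray(solution_set).T   # (n_weights, n_solutions)
    return scal.max(axis=1).mean()

def max_utility_loss(solution_set, optimal_set, weights):
    # Worst-case utility shortfall of the approximate set vs. the optimal set.
    approx = (weights @ np.asarray(solution_set).T).max(axis=1)
    best = (weights @ np.asarray(optimal_set).T).max(axis=1)
    return (best - approx).max()

weights = np.random.dirichlet(np.ones(2), size=10000)   # two objectives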
Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning
To rapidly learn a new task, it is often essential for agents to explore
efficiently -- especially when performance matters from the first timestep. One
way to learn such behaviour is via meta-learning. Many existing methods however
rely on dense rewards for meta-training, and can fail catastrophically if the
rewards are sparse. Without a suitable reward signal, the need for exploration
during meta-training is exacerbated. To address this, we propose HyperX, which
uses novel reward bonuses for meta-training to explore in approximate
hyper-state space (where hyper-states represent the environment state and the
agent's task belief). We show empirically that HyperX meta-learns better
task-exploration and adapts more successfully to new tasks than existing
methods.
Comment: Published at the International Conference on Machine Learning (ICML) 2021
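
As a toy stand-in for the paper's learned reward bonuses, the sketch below
scores the novelty of discretised hyper-states (environment state
concatenated with the task belief) by visitation counts; HyperX itself uses
prediction-error-style bonuses rather than hashing:

import numpy as np
from collections import defaultdict

class HyperStateBonus:
    # Count-based novelty bonus over binned (state, belief) hyper-states.
    def __init__(self, bin_width=0.25, scale=1.0):
        self.counts = defaultdict(int)
        self.bin_width, self.scale = bin_width, scale

    def __call__(self, state, belief):
        hyper = np.concatenate([state, belief])
        key = tuple(np.floor(hyper / self.bin_width).astype(int))
        self.counts[key] += 1
        return self.scale / np.sqrt(self.counts[key])  # decays with visits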
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
Trading off exploration and exploitation in an unknown environment is key to
maximising expected return during learning. A Bayes-optimal policy, which does
so optimally, conditions its actions not only on the environment state but on
the agent's uncertainty about the environment. Computing a Bayes-optimal policy
is however intractable for all but the smallest tasks. In this paper, we
introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to
perform approximate inference in an unknown environment, and incorporate task
uncertainty directly during action selection. In a grid-world domain, we
illustrate how variBAD performs structured online exploration as a function of
task uncertainty. We further evaluate variBAD on MuJoCo domains widely used in
meta-RL and show that it achieves higher online return than existing methods.
Comment: Published at ICLR 2020
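
The key interface can be sketched as a policy that consumes the environment
state together with the parameters of the approximate task posterior, so
action selection can depend on task uncertainty (the shapes and the discrete
action head are illustrative assumptions):

import torch
import torch.nn as nn

class BeliefConditionedPolicy(nn.Module):
    def __init__(self, state_dim, latent_dim, n_actions, hidden=64):
        super().__init__()
        # Input: state plus posterior mean and log-variance of the task belief.
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, belief_mu, belief_logvar):
        x = torch.cat([state, belief_mu, belief_logvar], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))

In variBAD the belief itself comes from a recurrent VAE meta-trained over
trajectories, so the policy never needs the true task identity.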